Due date: Thursday 10/8, 6pm
This assignment contains two parts:
In this assignment, we'll explore spatial trends in evictions in Philadelphia using data from the Eviction Lab, and building code violations using data from OpenDataPhilly.
We'll be exploring the idea that evictions can occur as retaliation against renters for reporting code violations. Spatial correlations between evictions and code violations from the City's Licenses and Inspections department can offer some insight into this question.
A couple of interesting background readings:
The Eviction Lab built the first national database for evictions. If you aren't familiar with the project, you can explore their website: https://evictionlab.org/
The first step is to read the eviction data by census tract using geopandas. The data for all of Pennsylvania by census tract can be downloaded in a GeoJSON format using the following url:
https://eviction-lab-data-downloads.s3.amazonaws.com/PA/tracts.geojson
A browser-friendly version of the data is available here: https://data-downloads.evictionlab.org/
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
import geopandas as gpd
import hvplot.pandas
import holoviews as hv
hv.extension("bokeh")
#read eviction file
eviction = gpd.read_file('F:/MUSA/MUSA550/assignment-3/assignment-3-master/data/tracts.geojson')
eviction.head()
We will need to trim the data to Philadelphia only. Take a look at the data dictionary for the descriptions of the various columns: https://eviction-lab-data-downloads.s3.amazonaws.com/DATA_DICTIONARY.txt
Note: the column names are shortened — see the end of the above file for the abbreviations. The numbers at the end of the columns indicate the years. For example, e-16 is the number of evictions in 2016.
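As a quick sanity check of this naming convention, the eviction count columns and their years can be reconstructed programmatically (a small sketch; the `e-03` through `e-16` pattern is assumed from the data dictionary described above):

```python
# Eviction count columns run from "e-03" (2003) to "e-16" (2016),
# following the abbreviation scheme in the data dictionary
eviction_cols = ['e-{:02d}'.format(y) for y in range(3, 17)]

# Map each shortened column name back to its full four-digit year
year_for = {col: 2000 + int(col.split('-')[1]) for col in eviction_cols}

print(eviction_cols[0], year_for['e-16'])  # -> e-03 2016
```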
Take a look at the individual columns and trim to census tracts in Philadelphia. (Hint: Philadelphia is both a city and a county).
#trim to census tracts in Philadelphia
evict_philly = eviction.loc[eviction["pl"] == "Philadelphia County, Pennsylvania"].copy()
evict_philly.head()
For this assignment, we are interested in the number of evictions by census tract for various years. Right now, each year has its own column, so it will be easiest to transform to a tidy format.
Use the pd.melt() function to transform the eviction data into tidy format, using the number of evictions from 2003 to 2016.
The tidy data frame should have four columns: GEOID, geometry, a column holding the number of evictions, and a column telling you what the name of the original column was for that value.
Hints:
- Use the GEOID and geometry columns as the id_vars. This will keep track of the census tract information.
- Pass the eviction count columns as the value_vars: value_vars = ['e-{:02d}'.format(x) for x in range(3, 17)]
#trim necessary columns
value_vars = ['e-{:02d}'.format(x) for x in range(3, 17)]
value_vars.extend(["GEOID","geometry"])
evict_t = evict_philly[value_vars]
#transform the eviction data into tidy format
evict_p = pd.melt(evict_t, id_vars=["GEOID","geometry"], value_name="Number of evictions",var_name='Year')
#rename the column
key = ['e-{:02d}'.format(x) for x in range(3, 17)]
year = [i for i in range(2003,2017)]
replace_year = dict(zip(key,year))
evict_p['Year'] = evict_p['Year'].replace(replace_year)
evict_p.head()
Use hvplot to plot the total number of evictions from 2003 to 2016. You will first need to perform a group by operation and sum up the total number of evictions for all census tracts, and then use hvplot() to make your plot.
You can use any type of hvplot chart you'd like to show the trend in number of evictions over time.
#group by year and count number of evictions
evict_sum = evict_p.groupby(['Year'])['Number of evictions'].sum()
plot1 = evict_sum.hvplot(kind='line')
plot1
Our tidy data frame is still a GeoDataFrame with a geometry column, so we can visualize the number of evictions for all census tracts.
Use hvplot() to generate a choropleth showing the number of evictions for a specified year, with a widget dropdown to select a given year (or variable name, e.g., e-16, e-15, etc).
Hints
- Use the groupby keyword to tell hvplot to make a series of maps, with a widget to select between them.
- Pass dynamic=False as a keyword argument to the hvplot() function.
- Choose a width and height that makes your output map (roughly) square to limit distortions.

#plot number of evictions across Philadelphia 2003-2016
evict_p.hvplot(c='Number of evictions',
geo=True,
frame_width=500,
frame_height=500,
groupby="Year",
dynamic=False,
cmap='viridis',
title='The number of evictions across Philadelphia')
Next, we'll explore data for code violations from the Licenses and Inspections Department of Philadelphia to look for potential correlations with the number of evictions.
L+I violation data for the years 2012 through 2016 (inclusive) is provided in CSV format in the "data/" folder.
Load the data using pandas and convert to a GeoDataFrame.
#read csv data and add geometry
violation = pd.read_csv('F:/MUSA/MUSA550/assignment-3/assignment-3-master/data/li_violations.csv')
violation = violation.dropna(subset=['lat', 'lng'])
violation['Coordinates'] = gpd.points_from_xy(violation['lng'], violation['lat'])
violation = gpd.GeoDataFrame(violation,
geometry="Coordinates",
crs="EPSG:4326")
violation.head()
There are many different types of code violations (running the nunique() function on the violationdescription column will count the unique types, and unique() will list them). More information on different types of violations can be found on the City's website.
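For reference, the difference between counting and listing unique values looks like this on a toy frame (the violationdescription column name matches the real data; the rows here are just illustrative):

```python
import pandas as pd

# Toy frame with the same column name as the L+I data
df = pd.DataFrame({
    "violationdescription": [
        "CO DETECTOR NEEDED",
        "INTERIOR SURFACES",
        "CO DETECTOR NEEDED",
    ]
})

n_types = df["violationdescription"].nunique()   # number of distinct types -> 2
all_types = df["violationdescription"].unique()  # array of the distinct labels
print(n_types, list(all_types))
```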
Below, I've selected 15 types of violations that deal with property maintenance and licensing issues. We'll focus on these violations. The goal is to see if these kinds of violations are correlated spatially with the number of evictions in a given area.
Use the list of violations given to trim your data set to only include these types.
violation_types = [
"INT-PLMBG MAINT FIXTURES-RES",
"INT S-CEILING REPAIR/MAINT SAN",
"PLUMBING SYSTEMS-GENERAL",
"CO DETECTOR NEEDED",
"INTERIOR SURFACES",
"EXT S-ROOF REPAIR",
"ELEC-RECEPTABLE DEFECTIVE-RES",
"INT S-FLOOR REPAIR",
"DRAINAGE-MAIN DRAIN REPAIR-RES",
"DRAINAGE-DOWNSPOUT REPR/REPLC",
"LIGHT FIXTURE DEFECTIVE-RES",
"LICENSE-RES SFD/2FD",
"ELECTRICAL -HAZARD",
"VACANT PROPERTIES-GENERAL",
"INT-PLMBG FIXTURES-RES",
]
# Trim to specific violation types
violation_t = violation.loc[violation["violationdescription"].isin(violation_types)].copy()
violation_t.head()
The code violation data is point data. We can get a quick look at the geographic distribution using matplotlib and the hexbin() function. Make a hex bin map of the code violations and overlay the census tract outlines.
Hints:
# convert to same CRS
evict_3857 = evict_t[['geometry','GEOID']].to_crs(epsg=3857)
violation_3857 = violation_t.to_crs(epsg=3857)
fig, ax = plt.subplots(figsize=(18, 12))
# Extract out the x/y coordinates of the Point objects
xcoords = violation_3857.Coordinates.x
ycoords = violation_3857.Coordinates.y
# Plot a hexbin chart
hex_vals = ax.hexbin(xcoords, ycoords, gridsize=50)
# Add the tracts
evict_3857.plot(ax=ax, facecolor="none", edgecolor="white",alpha=0.75, linewidth=0.25)
# Get the limits of the GeoDataFrame
xmin, ymin, xmax, ymax = evict_3857.total_bounds
# Set the xlims and ylims
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
# add a colorbar and format
fig.colorbar(hex_vals, ax=ax)
ax.set_axis_off()
ax.set_aspect("equal")
ax.set_title("Violations in Philadelphia 2012-2016")
To do a census tract comparison to our eviction data, we need to find which census tract each of the code violations falls into. Use the geopandas.sjoin() function to do just that.
Hints
Use the geometry column (specifying census tract polygons) and the GEOID column (specifying the name of each census tract).

#Spatially join data sets
joined = gpd.sjoin(violation_3857, evict_3857, op='within', how='right')
joined['GEOID'] = joined['GEOID'].astype('category')
len(joined['GEOID'].unique())
Next, we'll want to find the number of violations (for each kind) per census tract. You should group the data frame by violation type and census tract name.
The result of this step should be a data frame with three columns: violationdescription, GEOID, and N, where N is the number of violations of that kind in the specified census tract.
Optional: to make prettier plots
Some census tracts won't have any violations, and they won't be included when we do the above calculation. However, there is a trick to set the values for those census tracts to be zero. After you calculate the sizes of each violation/census tract group, you can run:
N = N.unstack(fill_value=0).stack().reset_index(name='N')
where N gives the total size of each of the groups, specified by violation type and census tract name.
See this StackOverflow post for more details.
This part is optional, but will make the resulting maps a bit prettier.
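The fill-in-zeros trick can be seen on a toy example (the GEOID and violation values below are hypothetical, but the column names match the real data):

```python
import pandas as pd

df = pd.DataFrame({
    "GEOID": ["tract_a", "tract_a", "tract_b"],
    "violationdescription": ["TYPE X", "TYPE Y", "TYPE X"],
})

# Group sizes: tract_b has no "TYPE Y" rows, so that group is absent
N = df.groupby(["GEOID", "violationdescription"]).size()

# unstack/stack with fill_value=0 inserts the missing group with a zero count
N = N.unstack(fill_value=0).stack().reset_index(name="N")
print(N)
```

After the round trip, the frame holds all four tract/type combinations, with `N = 0` for tract_b / TYPE Y.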
vio_g = joined.groupby(['GEOID','violationdescription']).size().unstack(fill_value=0).stack().reset_index(name='N')
len(vio_g['GEOID'].unique())
We now have the number of violations of different types per census tract specified as a regular DataFrame. You can now merge it with the census tract geometries (from your eviction data GeoDataFrame) to create a GeoDataFrame.
Hints
Use pandas.merge() and specify the on keyword to be the column holding census tract names (GEOID).

total = evict_3857.merge(vio_g, on='GEOID')
total.head()
Now, we can use hvplot() to create an interactive choropleth for each violation type and add a widget to specify different violation types.
Hints
- Use the groupby keyword to tell hvplot to make a series of maps, with a widget to select different violation types.
- Pass dynamic=False as a keyword argument to the hvplot() function.
- Choose a width and height that makes your output map (roughly) square to limit distortions.

#plot interactive choropleths for each violation type
total=total.to_crs(epsg=4326)
total.hvplot(c='N',
geo=True,
frame_width=500,
frame_height=500,
groupby="violationdescription",
dynamic=False,
cmap='viridis')
From the interactive maps of evictions and violations, you should notice a lot of spatial overlap.
As a final step, we'll make a side-by-side comparison to better show the spatial correlations. This will involve a few steps:
Use hvplot() to make two interactive choropleth maps, one for the data from step 1 and one for the data from step 2.

Note: since we selected a single year and violation type, you won't need to use the groupby= keyword here.
#Trim the data frame plotted in section 1.1.5 to only include evictions from 2016.
evict_2016 = evict_p.loc[evict_p['Year']==2016].reset_index()
#Trim the data frame plotted in section 1.2.7 to only include a single violation type (pick whichever one you want!)
vio_vp = total.loc[total['violationdescription']=="VACANT PROPERTIES-GENERAL"].reset_index()
#Use hvplot() to make two interactive choropleth maps, one for the data from step 1. and one for the data in step 2.
plot1=evict_2016.hvplot(c='Number of evictions',
geo=True,
frame_width=500,
frame_height=500,
dynamic=False,
cmap='viridis')
plot1
plot2=vio_vp.hvplot(c='N',
geo=True,
frame_width=500,
frame_height=500,
dynamic=False,
cmap='viridis')
plot2
#Show these two plots side by side (one row and 2 columns) using the syntax for combining charts.
plot1=plot1.relabel('Number of evictions 2016')
plot2=plot2.relabel('Vacant properties general')
combined = plot1 + plot2
combined.cols(2)
Identify the 20 most common types of violations within the time period of 2012 to 2016 and create a set of interactive choropleths similar to what was done in section 1.2.7.
Use this set of maps to identify 3 types of violations that don't seem to have much spatial overlap with the number of evictions in the City.
#trim the table with top20 violations
top20 = violation.groupby('violationdescription').size()
top20 = top20.sort_values(ascending=False)
top20 = top20.iloc[:20].reset_index()
list20 = top20['violationdescription'].values.tolist()
vio20 = violation.loc[violation["violationdescription"].isin(list20)].copy()
list20
#spatial join
vio20_3857=vio20.to_crs(epsg=3857)
joined1 = gpd.sjoin(vio20_3857, evict_3857, op='within', how='right')
joined1['GEOID'] = joined1['GEOID'].astype('category')
len(joined1['GEOID'].unique())
#Calculate the number of violations by type per census tract
vio20_g = joined1.groupby(['GEOID','violationdescription']).size().unstack(fill_value=0).stack().reset_index(name='N')
len(vio20_g['GEOID'].unique())
#merge the violation data with tracts
total1 = evict_3857.merge(vio20_g, on='GEOID')
total1.head()
total1=total1.to_crs(epsg=4326)
total1.hvplot(c='N',
geo=True,
frame_width=500,
frame_height=500,
groupby="violationdescription",
dynamic=False,
cmap='viridis')
Three types of violations that don't seem to have much spatial overlap with the number of evictions in the City:
1. EXT A-CLEAN RUBBISH/GARBAGE
2. LICENSE-RES GENERAL
3. VIOL C&I MESSAGE
In this part, we'll explore the NDVI in Philadelphia a bit more. This analysis includes two parts:
Use rasterio to load the landsat data for Philadelphia (available in the "data/" folder)
import rasterio as rio
# Open the file
landsat = rio.open("F:/MUSA/MUSA550/assignment-3/assignment-3-master/data/landsat8_philly.tif")
# Read the file
data = landsat.read(1)
Create two polygon objects, one for the city limits and one for the suburbs. To calculate the suburbs polygon, we will take everything outside the city limits but still within the bounding box.
#city polygon
city_limits = gpd.read_file("F:/MUSA/MUSA550/assignment-3/assignment-3-master/data/City_Limits.geojson")
#envelope polygon
box = city_limits.envelope
#suburb polygon
suburb = box.difference(city_limits)
Using the two polygons from the last section, use rasterio's mask functionality to create two masked arrays from the landsat data, one for the city and one for the suburbs.
For each masked array, calculate the NDVI.
from rasterio.mask import mask
import matplotlib.colors as mcolors
#convert CRS
city_limits = city_limits.to_crs(landsat.crs.data['init'])
#mask for city
city_masked, mask_transform = mask(
dataset=landsat,
shapes=city_limits.geometry,
crop=True,
all_touched=True,
filled=False,
)
#convert CRS
sub_limits = suburb.to_crs(landsat.crs.data['init'])
#mask for suburb
sub_masked, mask_transform = mask(
dataset=landsat,
shapes=sub_limits.geometry,
crop=True,
all_touched=True,
filled=False,
)
#NDVI function
def calculate_NDVI(nir, red):
"""
Calculate the NDVI from the NIR and red landsat bands
"""
# Convert to floats
nir = nir.astype(float)
red = red.astype(float)
# Get valid entries
check = np.logical_and( red.mask == False, nir.mask == False )
# Where the check is True, return the NDVI, else return NaN
ndvi = np.where(check, (nir - red ) / ( nir + red ), np.nan )
return ndvi
#calculate NDVI
NDVI_city = calculate_NDVI(city_masked[4], city_masked[3])
NDVI_sub = calculate_NDVI(sub_masked[4], sub_masked[3])
#plot NDVI in Philadelphia City
fig, ax = plt.subplots(figsize=(10,10))
# The extent of the data
landsat_extent = [
landsat.bounds.left,
landsat.bounds.right,
landsat.bounds.bottom,
landsat.bounds.top,
]
# Plot NDVI
img = ax.imshow(NDVI_city, extent=landsat_extent)
# Format and plot city limits
city_limits.plot(ax=ax, edgecolor='pink', facecolor='none', linewidth=4)
plt.colorbar(img)
ax.set_axis_off()
ax.set_title("NDVI in Philadelphia City", fontsize=18);
#plot NDVI in Philadelphia suburbs
fig, ax = plt.subplots(figsize=(10,10))
# Plot NDVI
img = ax.imshow(NDVI_sub, extent=landsat_extent)
# Format and plot city limits
sub_limits.plot(ax=ax, edgecolor='pink', facecolor='none', linewidth=4)
plt.colorbar(img)
ax.set_axis_off()
ax.set_title("NDVI in Philadelphia Suburbs", fontsize=18);
Hint: the nanmedian function will be useful for ignoring NaN elements.

MNDVI_city=np.nanmedian(NDVI_city)
MNDVI_sub=np.nanmedian(NDVI_sub)
print(MNDVI_city)
print(MNDVI_sub)
Suburbs have higher median NDVI
The data is available in the "data/" folder. It has been downloaded from OpenDataPhilly. It contains the locations of about 2,500 street trees in Philadelphia.
#read the file
tree = gpd.read_file('F:/MUSA/MUSA550/assignment-3/assignment-3-master/data/ppr_tree_canopy_points_2015.geojson')
tree.head()
#convert CRS
tree = tree.to_crs(landsat.crs.data['init'])
#calculate NDVI values with zonal_stats
from rasterstats import zonal_stats
tree_stats = zonal_stats(tree, NDVI_city, affine=landsat.transform, stats=[ 'median'])
tree_stats
Make two plots of the results:
- A histogram of the median NDVI values, made with the hist function. Include a vertical line that marks the NDVI = 0 threshold.
- A map of the tree points, colored by median NDVI, made with the plot function. Include the city limits boundary on your plot.

The figures should be clear and well-styled, with, for example, labels for axes, legends, and clear color choices.
# Store the median value in the tree data frame
tree['median_NDVI'] = [s['median'] for s in tree_stats]
tree.head()
#plot hist
fig, ax = plt.subplots(figsize=(10, 6))
ax.hist(tree["median_NDVI"],30, density=True, facecolor='g', alpha=0.75)
# Format the axes
ax.set_xlabel("Median NDVI")
ax.set_ylabel("Density")
ax.set_axisbelow(True)
ax.grid(True, axis="y",alpha=0.5,linestyle="dotted")
ax.set_yticklabels([f"{yval:,.0f}" for yval in ax.get_yticks()] )
ax.set_title("Median NDVI of Trees in Philadelphia")
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
# Add the NDVI = 0 threshold
ax.axvline(x=0, c='k', linewidth=2)
ax.text(0, 3, " NDVI = 0", ha='left', fontsize=14);
import contextily as ctx
#A plot of the street tree points, colored by the NDVI values
# create the axes
fig, ax = plt.subplots(figsize=(14, 14))
# add background tracts
evict_3857.to_crs(tree.crs).plot(ax=ax, facecolor="none", edgecolor="white",alpha=0.75, linewidth=0.25)
# plot trees
tree.plot(ax=ax,column='median_NDVI', marker='.',legend=True)
# add the city limits
city_limits.to_crs(tree.crs).plot(ax=ax, edgecolor='white', linewidth=3, facecolor='none')
# plot the basemap underneath
ctx.add_basemap(ax=ax, crs=tree.crs, source=ctx.providers.CartoDB.DarkMatter)
# remove axis lines
ax.set_axis_off()
ax.set_title("Median NDVI of Trees in Philadelphia")